    Calibration-free Text Entry Using Smooth Pursuit Eye Movements

    In this paper, we propose a calibration-free gaze-based text entry system that uses smooth pursuit eye movements. We report on our implementation, which improves over prior work on smooth pursuit text entry by 1) eliminating the need for calibration through motion correlation, 2) increasing the input rate from 3.34 to 3.41 words per minute, and 3) featuring text suggestions trained on 10,000 lexicon sentences recommended in the literature. We report on a user study (N=26), which shows that users are able to eye type at 3.41 words per minute without calibration and without user training. Qualitative feedback also indicates that users perceive the system positively. Our work is of particular benefit to disabled users and in situations where voice and tactile input are not feasible (e.g., in noisy environments or when the hands are occupied).
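
    The motion-correlation step that removes the need for calibration can be illustrated with a minimal sketch (all names and the 0.8 threshold below are illustrative assumptions, not values from the paper): every on-screen target moves along a known trajectory, and the system selects the target whose trajectory correlates most strongly with the raw gaze signal. Because Pearson correlation is invariant to offset and scale, the uncalibrated gaze coordinates never need to be mapped to screen coordinates.

        import numpy as np

        def pursuit_select(gaze_xy, target_trajs, threshold=0.8):
            # gaze_xy: (n, 2) raw gaze samples over a sliding window.
            # target_trajs: dict mapping target id -> (n, 2) target positions.
            best_id, best_corr = None, threshold
            for tid, traj in target_trajs.items():
                # Correlate x and y components separately, then average.
                cx = np.corrcoef(gaze_xy[:, 0], traj[:, 0])[0, 1]
                cy = np.corrcoef(gaze_xy[:, 1], traj[:, 1])[0, 1]
                corr = (cx + cy) / 2
                if corr > best_corr:
                    best_id, best_corr = tid, corr
            return best_id  # None if no target correlates well enough

        # Gaze that follows the circular target wins even though it is
        # shifted and scaled relative to the target's screen coordinates.
        t = np.linspace(0, 2 * np.pi, 120)
        circle = np.column_stack([np.cos(t), np.sin(t)])
        diagonal = np.column_stack([t, t])
        gaze = 40 + 15 * circle + np.random.default_rng(1).normal(0, 0.5, (120, 2))
        print(pursuit_select(gaze, {"circle": circle, "diagonal": diagonal}))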

    Just Gaze and Wave: Exploring the Use of Gaze and Gestures for Shoulder-surfing Resilient Authentication

    Eye gaze and mid-air gestures are promising for resisting various types of side-channel attacks during authentication. However, to date, a comparison of these authentication modalities is missing. We investigate multiple authentication mechanisms that leverage gestures, eye gaze, and a multimodal combination of them, and study their resilience to shoulder surfing. To this end, we report on our implementation of three schemes and on results from usability and security evaluations in which we also experimented with fixed and randomized layouts. We found that the gaze-based approach outperforms the other schemes in terms of input time, error rate, perceived workload, and resistance to observation attacks, and that randomizing the layout does not improve observation resistance enough to warrant the reduced usability. Our work further underlines the importance of replicating previous eye tracking studies using today's sensors, as we show significant improvements over similar previously introduced gaze-based authentication systems.

    Think Harder! Investigating the Effect of Password Strength on Cognitive Load during Password Creation

    Strict password policies can frustrate users, reduce their productivity, and lead them to write their passwords down. This paper investigates the relationship between password creation and cognitive load, inferred from pupil diameter. We use a wearable eye tracker to monitor the user's pupil size while they create passwords of different strengths. To assess how creating passwords of different strengths (namely weak and strong) influences users' cognitive load, we conducted a lab study (N = 15) in which we asked participants to create and enter 6 weak and 6 strong passwords. The results showed that password strength affects pupil diameter, thereby giving an indication of the user's cognitive state. Our initial investigation shows the potential for new applications in the field of cognition-aware user interfaces. For example, future systems could use our results to determine, based on gaze behavior, whether the user created a strong password, without the need to reveal the characteristics of the password.
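
    A common way to operationalize this is to compare baseline-corrected mean pupil diameter between conditions; larger dilation relative to rest indicates higher cognitive load. The sketch below is a minimal illustration of that idea using synthetic numbers; the sampling rate, diameters, and function names are assumptions, not values from the study.

        import numpy as np

        rng = np.random.default_rng(0)
        rest = rng.normal(3.0, 0.05, 600)          # hypothetical resting diameters (mm)
        weak_trial = rng.normal(3.1, 0.08, 600)    # hypothetical weak-password trial
        strong_trial = rng.normal(3.3, 0.08, 600)  # hypothetical strong-password trial

        def pupil_dilation_score(task, baseline):
            # Baseline-corrected mean diameter; blink artifacts are assumed
            # to have been removed already (e.g., replaced with NaN).
            return np.nanmean(task) - np.nanmean(baseline)

        print(pupil_dilation_score(weak_trial, rest))    # smaller dilation
        print(pupil_dilation_score(strong_trial, rest))  # larger dilation, higher load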

    Investigating privacy perceptions and subjective acceptance of eye tracking on handheld mobile devices

    Although eye tracking brings many benefits to users of mobile devices and to developers of mobile applications, it poses significant privacy risks to both the users of mobile devices and the bystanders around them who are within the front-facing camera's field of view. Recent research demonstrates that tracking an individual's gaze reveals personal and sensitive information. This paper presents an investigation of users' privacy perceptions and subjective acceptance of eye tracking on handheld mobile devices. In a four-phase user study (N=17), participants used a smartphone eye tracking app, were interviewed before and after viewing a video showing the amount of sensitive and personal data that can be derived from eye movements, and had their privacy concerns measured. Our findings 1) identify factors that influence users' and bystanders' attitudes toward eye tracking on mobile devices, such as the transparency of the algorithms and the credibility of the developers, and 2) support the design of mechanisms for privacy-aware eye tracking solutions on mobile devices.

    Comparing Dwell Time, Pursuits and Gaze Gestures for Gaze Interaction on Handheld Mobile Devices

    Gaze is promising for hands-free interaction on mobile devices. However, it is not clear how gaze interaction methods compare to each other in mobile settings. This paper presents the first experiment in a mobile setting that compares three of the most commonly used gaze interaction methods: Dwell time, Pursuits, and Gaze gestures. In our study, 24 participants selected one of 2, 4, 9, 12, and 32 targets via gaze while sitting and while walking. Results show that input using Pursuits is faster than Dwell time and Gaze gestures, especially when there are many targets. Users prefer Pursuits when stationary, but prefer Dwell time when walking. While selection using Gaze gestures is more demanding and slower when there are many targets, it is suitable for contexts where accuracy is more important than speed. We conclude with guidelines for the design of gaze interaction on handheld mobile devices.
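
    Of the three methods, dwell time is the simplest to express in code: a target is selected once gaze remains inside it for a fixed duration. The sketch below illustrates the idea; the 0.8 s threshold, circular targets, and sample format are illustrative assumptions, not parameters from the study.

        def dwell_select(gaze_stream, targets, dwell_s=0.8):
            # gaze_stream: iterable of (timestamp_s, x, y) gaze samples.
            # targets: dict mapping target id -> (x, y, radius) in pixels.
            current, since = None, None
            for t, x, y in gaze_stream:
                hit = next((tid for tid, (tx, ty, r) in targets.items()
                            if (x - tx) ** 2 + (y - ty) ** 2 <= r ** 2), None)
                if hit != current:              # gaze entered a new target (or left)
                    current, since = hit, t
                elif hit is not None and t - since >= dwell_s:
                    return hit                  # dwell threshold reached
            return None

        # One second of synthetic 60 Hz gaze resting on target "A".
        samples = [(i / 60, 100, 100) for i in range(60)]
        print(dwell_select(samples, {"A": (100, 100, 40), "B": (300, 100, 40)}))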

    Are Thermal Attacks Ubiquitous? When Non-Expert Attackers Use Off the Shelf Thermal Cameras

    Recent work showed that applying image processing techniques to thermal images taken with high-end equipment reveals passwords entered on touchscreens and keyboards. In this paper, we investigate the susceptibility of common touch inputs to thermal attacks when non-expert attackers visually inspect thermal images. Using an off-the-shelf thermal camera, we collected thermal images of a smartphone's touchscreen and a laptop's touchpad after 25 participants had entered passwords using touch gestures and touch taps. We show that visual inspection of the thermal images by 18 participants reveals the majority of passwords. Touch gestures are more vulnerable to thermal attacks (60.65% successful attacks) than touch taps (23.61%), and attacks against touchscreens are more accurate than attacks against touchpads (87.04% vs. 56.02%). We discuss how the affordability of thermal attacks and the nature of touch interactions make the threat ubiquitous, and the implications this has for security.
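
    The image-processing attacks cited from prior work boil down to isolating heat residue and ordering it by temperature: because traces cool over time, warmer traces correspond to more recent touches. The sketch below is a hypothetical minimal version of that idea; the ambient temperature, threshold, and function names are assumptions, not details from the paper.

        import numpy as np
        from scipy import ndimage

        def recover_touch_order(thermal, ambient_c=24.0, delta_c=1.5):
            # thermal: 2D array of per-pixel surface temperatures in Celsius.
            mask = thermal > ambient_c + delta_c       # pixels warmer than ambient
            labels, n = ndimage.label(mask)            # connected heat blobs
            blobs = []
            for i in range(1, n + 1):
                ys, xs = np.nonzero(labels == i)
                blobs.append((thermal[ys, xs].mean(), (xs.mean(), ys.mean())))
            # Coolest (oldest) blob first, warmest (most recent) last.
            return [pos for _, pos in sorted(blobs)]

        img = np.full((120, 160), 24.0)
        img[40:44, 30:34] += 3.0   # older, cooler trace
        img[80:84, 90:94] += 6.0   # newer, warmer trace
        print(recover_touch_order(img))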

    GazeCast: Using Mobile Devices to Allow Gaze-based Interaction on Public Displays

    Gaze is promising for natural and spontaneous interaction with public displays, but current gaze-enabled displays require movement-hindering stationary eye trackers or cumbersome head-mounted eye trackers. We propose and evaluate GazeCast, a novel system that leverages users' handheld mobile devices to allow gaze-based interaction with surrounding displays. In a user study (N = 20), we compared GazeCast to a standard webcam for gaze-based interaction using Pursuits. We found that while selection using GazeCast requires more time and physical demand, participants value GazeCast's high accuracy and flexible positioning. We conclude by discussing how mobile computing can facilitate the adoption of gaze interaction with pervasive displays.

    How Unique do we Move?: Understanding the Human Body and Context Factors for User Identification

    Past work has shown great promise in biometric user identification and authentication by exploiting specific features of specific body parts. We investigate human motion across the whole body to explore which body parts exhibit more unique movement patterns and are thus more suitable for identifying users in general. We collect and analyze full-body motion data across various activities (e.g., sitting, standing), handheld objects (uni- or bimanual), and tasks (e.g., watching TV or walking). Our analysis shows, for example, that gait is a strong feature that is amplified when carrying items, that game activity elicits more unique behavior than texting on a smartphone, and that motion features are robust across body parts whereas posture features are more robust across tasks. Our work provides a holistic reference on how context affects the human motion that identifies us across a variety of factors, informing researchers and practitioners of behavioral biometric systems at scale.
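
    A typical pipeline for this kind of analysis summarizes each tracked body part's movement as a small feature vector that can be fed to a classifier. The sketch below computes simple velocity and acceleration statistics per joint; the 90 Hz tracker rate and all names are illustrative assumptions, not details from the paper.

        import numpy as np

        def joint_motion_features(positions, dt=1 / 90):
            # positions: (n, 3) array of tracked 3D positions for one body part,
            # sampled every dt seconds (90 Hz is an assumed tracker rate).
            vel = np.diff(positions, axis=0) / dt
            acc = np.diff(vel, axis=0) / dt
            speed = np.linalg.norm(vel, axis=1)
            acc_mag = np.linalg.norm(acc, axis=1)
            # Mean/std of speed and acceleration magnitude as per-joint features.
            return np.array([speed.mean(), speed.std(), acc_mag.mean(), acc_mag.std()])

        rng = np.random.default_rng(0)
        walk = np.cumsum(rng.normal(0, 0.01, (900, 3)), axis=0)  # 10 s synthetic motion
        print(joint_motion_features(walk))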

    Understanding Shoulder Surfer Behavior and Attack Patterns Using Virtual Reality

    In this work, we explore attacker behavior during shoulder surfing. As such behavior is often opportunistic and difficult to observe in real-world settings, we leverage the capabilities of virtual reality (VR). We recruited 24 participants and observed their behavior in two virtual waiting scenarios: at a bus stop and in an open office space. In both scenarios, participants shoulder surfed private screens displaying different types of content. From the results, we derive an understanding of the factors influencing shoulder surfing behavior, reveal common attack patterns, and sketch a behavioral model of shoulder surfing. Our work suggests directions for future research on shoulder surfing and can serve as a basis for creating novel approaches to mitigate it.

    Exploring the Scalability of Behavioral Mid-air Gestures Authentication.
